A Non-Symbolic Theory of Conscious Content: Imagery and Activity
Nigel J.T. Thomas
Source: http://www.imagery-imagination.com/nonsym.htm
But there were already other trends within AI, and other roots of cognitivism. Turing, after all, had suggested that intelligent behavior, rather than human-like inner processing, would be the criterion of machine intelligence (Turing, 1950). In this vein, although still committed to symbolic computation, Minsky advocated AI as engineering rather than as psychology. Any means that could be devised to get machines to behave intelligently would be worth pursuing, regardless of whether they were the means used by humans (McCorduck, 1979). But although Minsky and his heirs did not want to depend on psychology for their inspiration, cognitive psychological theorists often did find inspiration in the ingenious programs that such engineers developed. If the relevant internal symbols and processing structures did not show up in introspection, that might mean no more than that they reflected deeper, unconscious (and probably more fundamental) mental processes than those being modeled at Carnegie-Mellon.
Chomsky's linguistic theories were also an important influence on the formation of cognitive science, and reinforced the trend toward thinking of cognitive theory as being concerned with non-conscious symbol processing. Chomsky, famously, distinguished depth grammar, which is universal and innate, from the surface grammar of the languages that we actually speak, and consciously think in. Cognitive theories inspired by this picture would naturally focus on the computational representation of depth grammar structures, and on the transformation processes between depth and surface. We are, of course, conscious of neither of these things, but the natural language thoughts of which we are conscious appear, from this point of view, as little more than epiphenomena of the non-conscious representations and processes deeper down. The philosophical commitments of this view were spelled out in Fodor's classic The Language of Thought (1975). It is clear that the innate "mentalese" language that he proposes (i.e., the symbolic computational knowledge representation system) is not to be understood as consciously experienced. If it were, the book's subtle arguments would be quite superfluous!
It appears to me that this history has led to a continuing tension within traditional 'symbolic' cognitive science between seeing the computational symbol structures as equivalent to conscious thoughts, and seeing them as modeling processes that go on below the conscious level. The former tradition allows researchers to see and present themselves as working on explaining the mind in the sense in which it is pre-theoretically understood by most of us (i.e., as conscious mind). However, the latter view gives theorists and programmers much more freedom to apply their ingenuity, and it is the only possible view when one is tackling processes whose underlying mechanisms clearly are not conscious, such as perceptual processes. Thus, in practice, most symbolic programmers today probably give little attention to whether the symbols they work with are reasonable candidates for models of conscious contents of human minds. I want to suggest that they are wise to ignore this issue. It was a mistake from the first to think that such computational symbols, embodied, at the physical level, by the movements of electrons in silicon or ions in the brain, would somehow quicken and glow with refulgent consciousness(1) (or even intentionality - Cummins, 1989, 1996), once an intelligently behaving system was achieved.
This is not to say (I do not say) that such an artificial computational system might not actually be conscious, just that it is a mistake to expect the computational symbols it manipulates to be the very things of which it is conscious. It should be noted that this mistake has been made by critics of AI at least as much as by its proponents. It is, surely, Searle's mistake in his notorious Chinese Room argument (1980). Searle correctly realizes that however intelligently a program behaves, and however intimate he may become with the symbols it manipulates, and their vicissitudes, he will never see them glow with the light of intrinsic intentionality(2). He concludes that such a program can never constitute a mind. Others, similarly disappointed by the failure to find the glow even in the brain, conclude that understanding consciousness is a problem too "hard" for cognitive science (or any other sort of natural science)(3). Yet others bang the table and beg to differ. They are all looking for intentionality and consciousness in the wrong place.
In fact, the practice of referring to computational symbols in cognitive systems as mental representations has been severely misleading. Once this point is taken on board, it should be apparent that non-representational cognitive theories are no worse off than are (properly interpreted) symbolic theories when it comes to explaining consciousness. None of them do so directly. That does not mean that they are not relevant. I believe that Turing's underlying insight was correct: if you can get the behavior right then, to all intents and purposes, you have created a conscious mind. The proper task of Artificial Intelligence research is getting the behavior right. But what constitutes getting it right? Certainly not just fooling people in a teletyped conversation. In what follows I will attempt to sketch a theory of how conscious content, specifically mental imagery, might be explained in terms of certain behaviors. Imagery (which I take to include verbal imagery, inner speech) is the quintessential conscious thought content. What follows will focus mainly on vision and visual imagery, but with the understanding that equivalent considerations apply to all modalities. The theory draws upon work in situated robotics to some extent, but it is ultimately neutral about the engineering problem of how to get a machine to behave in the relevant ways, and about the biological problem of how best to explain how our brains get us to behave in the relevant ways. Symbolicists, dynamicists, connectionists, and the rest can continue to duke that out.